-
Anaerobic digestion (AD) is a well-established waste-to-value technology commonly used at water resource recovery facilities (WRRFs) to generate biogas from organic waste. However, the biogas is typically used only for heat and electricity generation because of its contaminants, while the nutrient-rich AD effluent requires further treatment before it can be released to the environment. Methanotroph-microalgae cocultures have recently emerged as promising candidates for integrated biogas valorization and nutrient recovery. Although the choice of coculture pair is one of the most important factors determining the performance of such applications, no results have been reported on comparing or screening different coculture pairs for a desired application. To expedite the screening of methanotroph-microalgae cocultures for optimal performance, we developed a cost-effective screening system consisting of nine parallel bioreactors. The compact design allows the system to fit in a fume hood and enables the simultaneous evaluation of multiple species, in triplicate, under uniformly controlled conditions. The system was applied to screen seven methanotrophs, five microalgae, and six methanotroph-microalgae coculture pairs on diluted AD effluent from a local WRRF. To systematically assess the growth performance of the different monocultures and cocultures, mathematical models describing microbial growth under batch cultivation were developed to determine the maximum growth rate, delay time, and carrying capacity from the growth data, allowing consistent assessment across species and identification of coculture pairs with synergistic or inhibitory interactions. The developed experimental system and modeling approach enabled expedited strain screening and unbiased assessment for integrated biogas valorization and nutrient recovery. Specifically, the cost of each bioreactor system in S3 is less than 5% of that of a commercially available bioreactor system (such as the Bioflo 120), while the screening throughput of S3 is nine times that of a single bioreactor system. In addition, the identified synergistic cocultures demonstrate potential for scalable biogas valorization and nutrient recovery in wastewater treatment.
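As a rough illustration of the kind of batch-growth model used to extract these three quantities, the sketch below fits a logistic curve with an explicit lag phase to synthetic optical-density data; the model form, data, and parameter values are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch: fitting a lagged logistic growth model to batch growth data
# to extract a maximum growth rate (mu_max), delay/lag time (t_lag), and
# carrying capacity (K). The exact model form used in the study is not given
# here; this assumes a standard logistic curve shifted by a lag phase.
import numpy as np
from scipy.optimize import curve_fit

def lagged_logistic(t, K, mu_max, t_lag, x0):
    """Logistic growth that starts only after a lag phase of length t_lag."""
    t_eff = np.maximum(t - t_lag, 0.0)          # no growth before t_lag
    return K / (1.0 + (K / x0 - 1.0) * np.exp(-mu_max * t_eff))

# Illustrative batch growth data (time in h, biomass as optical density).
t_obs = np.array([0, 12, 24, 36, 48, 60, 72, 84, 96], dtype=float)
x_obs = np.array([0.05, 0.05, 0.08, 0.18, 0.45, 0.85, 1.15, 1.25, 1.28])

# Fit the three quantities reported in the study plus the initial biomass.
p0 = [1.3, 0.1, 10.0, 0.05]                     # guesses: K, mu_max, t_lag, x0
popt, _ = curve_fit(lagged_logistic, t_obs, x_obs, p0=p0, maxfev=10000)
K, mu_max, t_lag, x0 = popt
print(f"K = {K:.2f}, mu_max = {mu_max:.3f} 1/h, lag = {t_lag:.1f} h")
```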
-
Indirect function calls are widely used in building system software such as OS kernels because of their flexibility and performance. Statically resolving indirect-call targets is a fundamental requirement for various program analysis and protection tasks, yet it has long been known to be a hard problem. The state-of-the-art techniques, which rely on type analysis, are still imprecise. In this paper, we present a new approach, TFA, that precisely identifies indirect-call targets. The intuition behind TFA is that type-based analysis and data-flow analysis are inherently complementary in resolving indirect-call targets. TFA incorporates a co-analysis system that makes the best use of both type information and data-flow information. The co-analysis keeps refining the global call graph iteratively, allowing us to achieve an optimal indirect-call analysis. We have implemented TFA in LLVM and evaluated it on five well-known large-scale programs. The experimental results show that TFA eliminates an additional 24% to 59% of indirect-call targets compared with the state-of-the-art approaches, without introducing new false negatives. With the precise indirect-call analysis, we further developed a strengthened fine-grained forward-edge control-flow integrity scheme and applied it to the Linux kernel. We have also used the refined indirect-call analysis results in bug detection, where we found 8 deep bugs in the Linux kernel. As a generic technique, the precise indirect-call analysis of TFA can also benefit other applications such as compiler optimization and software debloating.
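The complementary co-analysis can be pictured as a fixpoint loop in which each indirect call site keeps only the targets allowed by both the type analysis and the data-flow analysis. The toy sketch below illustrates that loop; the data structures and the flows_to table are hypothetical and greatly simplified compared with TFA's actual LLVM implementation.

```python
# Toy illustration of the co-analysis intuition: a type-based analysis and a
# data-flow (address-taken reachability) analysis each propose a target set
# for every indirect call site, and the call graph keeps only targets that
# both analyses allow, iterating until nothing changes.

def refine_targets(call_sites, type_matches, flows_to):
    """call_sites: list of site ids.
    type_matches[site]: functions whose signature matches the site.
    flows_to(site, targets): functions whose address may flow to the site;
    a real analysis would recompute this as the call graph shrinks."""
    targets = {s: set(type_matches[s]) for s in call_sites}
    changed = True
    while changed:                                   # fixpoint iteration
        changed = False
        for s in call_sites:
            refined = targets[s] & flows_to(s, targets)
            if refined != targets[s]:
                targets[s] = refined
                changed = True
    return targets

# Tiny example: two indirect call sites and four address-taken functions.
def flows_to(site, current_targets):
    # In a real system this is a data-flow analysis over the current call
    # graph; here it is faked with a fixed table for illustration.
    table = {"cs1": {"f1", "f2", "f3"}, "cs2": {"f2", "f4"}}
    return table[site]

type_matches = {"cs1": {"f1", "f2"}, "cs2": {"f2", "f3", "f4"}}
print(refine_targets(["cs1", "cs2"], type_matches, flows_to))
# cs1 keeps {f1, f2}; cs2 is narrowed to {f2, f4}.
```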
-
Aim: Metabolic interactions within a microbial community play a key role in determining the structure, function, and composition of the community. However, due to the complexity and intractability of natural microbiomes, limited knowledge is available on the interspecies interactions within a community. In this work, using a binary synthetic microbiome, a methanotroph-photoautotroph (M-P) coculture, as the model system, we examined different genome-scale metabolic modeling (GEM) approaches to better understand the metabolic interactions within the coculture, how they contribute to the enhanced growth observed in the coculture, and how they evolve over time. Methods: Using batch growth data of the model M-P coculture, we compared three GEM approaches for microbial communities. Two are existing approaches: SteadyCom, a steady-state GEM, and dynamic flux balance analysis (DFBA) Lab, a dynamic GEM. We also propose an improved dynamic GEM approach, DynamiCom, for the M-P coculture. Results: SteadyCom can predict the metabolic interactions within the coculture but not their dynamic evolution; DFBA Lab can predict the dynamics of the coculture but cannot identify interspecies interactions. DynamiCom was able to identify the cross-fed metabolite within the coculture and to predict the evolution of the interspecies interactions over time. Conclusion: A new dynamic GEM approach, DynamiCom, was developed for a model M-P coculture. Constrained by the predictions from a validated kinetic model, DynamiCom consistently predicted the top metabolites being exchanged in the M-P coculture, as well as the establishment of the mutualistic N-exchange between the methanotroph and cyanobacteria. The interspecies interactions and their dynamic evolution predicted by DynamiCom are supported by ample evidence in the literature on methanotrophs, cyanobacteria, and other cyanobacteria-heterotroph cocultures.
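To make the contrast between steady-state and dynamic GEMs concrete, the sketch below shows the basic loop of a dynamic FBA simulation for a toy methanotroph-photoautotroph pair: an FBA linear program is solved at each time step for the current gas availability, and the shared CH4/O2/CO2 pools and biomasses are then integrated forward. The stoichiometry, kinetic parameters, and yields are invented for illustration and are not the genome-scale models or parameters used in this study.

```python
# Minimal dynamic-FBA loop for a toy methanotroph-photoautotroph pair.
import numpy as np
from scipy.optimize import linprog

def methanotroph_fba(ch4, o2):
    """Tiny FBA problem: variables v = [v_ch4, v_bio, v_co2] (mmol/gDW/h).
    Internal carbon balance: v_ch4 - v_bio - v_co2 = 0; at least 40% of the
    carbon taken up must be respired to CO2; maximize the biomass flux."""
    # Monod-type cap on CH4 uptake, limited by both CH4 and O2 availability.
    v_ch4_max = 8.0 * ch4 / (0.1 + ch4) * o2 / (0.05 + o2)
    c = [0.0, -1.0, 0.0]                       # maximize v_bio
    A_eq = [[1.0, -1.0, -1.0]]; b_eq = [0.0]   # carbon balance
    A_ub = [[0.4, 0.0, -1.0]]; b_ub = [0.0]    # v_co2 >= 0.4 * v_ch4
    bounds = [(0.0, v_ch4_max), (0.0, None), (0.0, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method="highs")
    return res.x                               # [v_ch4, v_bio, v_co2]

# Extracellular pools (mmol/L) and biomasses (gDW/L); illustrative values.
ch4, o2, co2 = 5.0, 0.5, 0.1
x_m, x_a = 0.02, 0.02
dt, t_end = 0.1, 48.0

for _ in np.arange(0.0, t_end, dt):
    v_ch4, v_bio_m, v_co2 = methanotroph_fba(ch4, o2)
    # The photoautotroph is reduced to Monod kinetics here; in a real dynamic
    # community GEM this would be a second FBA problem.
    v_co2_up = 4.0 * co2 / (0.05 + co2)        # CO2 fixation flux
    v_bio_a, v_o2_out = 0.05 * v_co2_up, v_co2_up
    # Integrate the biomasses and the shared gas pools (explicit Euler).
    ch4 = max(ch4 - v_ch4 * x_m * dt, 0.0)
    o2  = max(o2 + (v_o2_out * x_a - 1.5 * v_ch4 * x_m) * dt, 0.0)
    co2 = max(co2 + (v_co2 * x_m - v_co2_up * x_a) * dt, 0.0)
    x_m += v_bio_m * 0.05 * x_m * dt           # 0.05 gDW per mmol biomass flux
    x_a += v_bio_a * x_a * dt
print(f"final biomass: methanotroph {x_m:.3f}, photoautotroph {x_a:.3f} gDW/L")
```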
-
Pressure swing adsorption (PSA) is a widely used technology for separating a gas product from impurities in a variety of fields. Due to the complexity of PSA operations, process and instrument faults can occur at different parts and/or steps of the process. Effective process monitoring is therefore critical for ensuring efficient and safe operation of PSA systems. However, multi-bed PSA processes present several major challenges for process monitoring. First, a PSA process is operated in a periodic or cyclic fashion and never reaches a steady state. Second, the duration of the operating cycles is dynamically adjusted in response to various disturbances, which results in a wide range of normal operating trajectories. Third, data for process monitoring are limited, and bed pressure is usually the only measured variable. These characteristics make process monitoring, especially early fault detection, significantly more challenging than for a continuous process operated at a steady state. To address these challenges, we propose a feature-based statistical process monitoring (SPM) framework for PSA processes, namely feature space monitoring (FSM). Through feature engineering and feature selection, we show that FSM can naturally handle the key challenges in PSA process monitoring and achieve early detection of subtle faults across a wide range of normal operating conditions. The performance of FSM is compared to that of conventional SPM methods using both simulated and real faults from an industrial PSA process. The results demonstrate FSM's superior fault detection and fault diagnosis performance compared with the traditional SPM methods. In particular, FSM achieves robust monitoring performance without any of the data preprocessing, trajectory alignment, or synchronization required by conventional SPM methods.
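The core idea of FSM, monitoring engineered per-cycle features rather than the raw, unsynchronized pressure trajectories, can be sketched as follows. The features, the PCA-based charting, and the synthetic cycles below are illustrative placeholders, not the feature set or monitoring model reported in the paper.

```python
# Sketch: summarize each PSA cycle by a few features, then monitor those
# features with a standard multivariate chart (PCA with Hotelling's T^2 and
# the SPE/Q statistic). Cycles of different lengths need no alignment.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def cycle_features(pressure, dt=1.0):
    """Summarize one cycle's bed-pressure trajectory (1-D array) by simple
    features: duration, max, min, mean, and average pressurization slope."""
    rising = np.diff(pressure)
    return np.array([len(pressure) * dt, pressure.max(), pressure.min(),
                     pressure.mean(), rising[rising > 0].mean()])

def fit_monitor(normal_cycles, n_components=2):
    X = np.array([cycle_features(c) for c in normal_cycles])
    scaler = StandardScaler().fit(X)
    pca = PCA(n_components=n_components).fit(scaler.transform(X))
    return scaler, pca

def t2_spe(cycle, scaler, pca):
    x = scaler.transform(cycle_features(cycle).reshape(1, -1))
    scores = pca.transform(x)
    t2 = float(np.sum(scores**2 / pca.explained_variance_))  # Hotelling's T^2
    spe = float(np.sum((x - pca.inverse_transform(scores))**2))  # Q statistic
    return t2, spe

# Synthetic "normal" cycles with varying durations, plus one faulty cycle
# whose peak pressure is too low.
rng = np.random.default_rng(0)
normal = []
for _ in range(50):
    up = np.linspace(1.0, 6.0, rng.integers(40, 60))
    down = np.linspace(6.0, 1.0, rng.integers(40, 60))
    cyc = np.concatenate([up, down])
    normal.append(cyc + rng.normal(0.0, 0.05, cyc.size))
scaler, pca = fit_monitor(normal)
faulty = np.concatenate([np.linspace(1.0, 5.2, 50), np.linspace(5.2, 1.0, 50)])
print("T2, SPE =", t2_spe(faulty, scaler, pca))
```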
-
In the past few decades, we have witnessed tremendous advancements in biology, the life sciences, and healthcare. These advancements are due in no small part to the big data made available by various high-throughput technologies, ever-advancing computing power, and algorithmic advancements in machine learning. Specifically, big data analytics, such as statistical and machine learning methods, has become an essential tool in these rapidly developing fields. As a result, the subject has drawn increased attention, and many review papers have been published on it in just the past few years. Different from all existing reviews, this work focuses on the application of systems engineering principles and techniques to some of the common challenges in big data analytics for biological, biomedical, and healthcare applications. Specifically, this review focuses on three key areas of biological big data analytics where systems engineering principles and techniques have played important roles: the principle of parsimony in addressing overfitting, the dynamic analysis of biological data, and the role of domain knowledge in biological data analytics.
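As one concrete example of the parsimony principle the review discusses, an L1-regularized (LASSO) regression keeps only a handful of predictors when the number of candidate features far exceeds the number of samples, a common way to curb overfitting in omics-scale data. The snippet below is a generic illustration with synthetic data, not an analysis from the review.

```python
# Parsimony via L1 regularization: with many more features than samples, a
# cross-validated LASSO drives most coefficients exactly to zero, yielding a
# sparse (parsimonious) model. Data are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_samples, n_features, n_informative = 60, 500, 5
X = rng.normal(size=(n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[:n_informative] = rng.uniform(2.0, 4.0, n_informative)
y = X @ true_coef + rng.normal(0.0, 0.5, n_samples)

# The penalty strength is chosen by cross-validation.
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} of {n_features} features kept:", selected[:10])
```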